Multitask Learning: A Knowledge-Based Source of Inductive Bias

Author

  • Rich Caruana
Abstract

This paper suggests that it may be easier to learn several hard tasks at one time than to learn these same tasks separately. In effect, the information provided by the training signal for each task serves as a domain-specific inductive bias for the other tasks. Frequently the world gives us clusters of related tasks to learn. When it does not, it is often straightforward to create additional tasks. For many domains, acquiring inductive bias by collecting additional teaching signal may be more practical than the traditional approach of codifying domain-specific biases acquired from human expertise. We call this approach Multitask Learning (MTL). Since much of the power of an inductive learner follows directly from its inductive bias, multitask learning may yield more powerful learning. An empirical example of multitask connectionist learning is presented where learning improves by training one network on several related tasks at the same time. Multitask decision tree induction is also outlined.
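The abstract describes the core MTL mechanism: one network trained on several related tasks at once, so each task's training signal biases the representation learned for the others. The sketch below is a minimal illustration of that idea, not the paper's own experiment: a tiny NumPy backprop net with one shared hidden layer and one sigmoid output head per task, trained jointly on two hypothetical related binary tasks (both depend on the same underlying quantity). All specifics — layer sizes, learning rate, the toy tasks — are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two hypothetical related binary tasks over the same inputs:
# both are thresholds of the same underlying quantity x0 + x1.
X = rng.uniform(-1, 1, size=(200, 2))
s = X[:, 0] + X[:, 1]
Y = np.stack([(s > 0.0).astype(float),      # task 1
              (s > 0.5).astype(float)], 1)  # task 2 (a related threshold)

# One shared hidden layer (the shared representation),
# one output unit per task.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 2)); b2 = np.zeros(2)

lr = 0.5
for _ in range(2000):
    H = sigmoid(X @ W1 + b1)       # shared representation
    P = sigmoid(H @ W2 + b2)       # per-task predictions
    G = (P - Y) / len(X)           # gradient of summed cross-entropy
    dW2 = H.T @ G
    dH = (G @ W2.T) * H * (1 - H)  # both tasks' errors shape the shared layer
    dW1 = X.T @ dH
    W2 -= lr * dW2; b2 -= lr * G.sum(0)
    W1 -= lr * dW1; b1 -= lr * dH.sum(0)

acc = ((P > 0.5) == (Y > 0.5)).mean(0)  # per-task training accuracy
```

The inductive transfer happens in the `dH` line: gradients from every task's output flow back into the same hidden weights `W1`, so the shared representation is pulled toward features useful for all tasks rather than any single one.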


Related papers

Incorporating Prior Knowledge About Financial Markets Through Neural Multitask Learning

We present the systematic method of Multitask Learning for incorporating prior knowledge (hints) into the inductive learning system of neural networks. Multitask Learning is an inductive transfer method which uses domain information about related tasks as inductive bias to guide the learning process towards better solutions of the main problem. These tasks are presented to the learning system i...


Multitask Learning

Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new eviden...


Algorithms and Applications for Multitask Learning

Multitask Learning is an inductive transfer method that improves generalization by using domain information implicit in the training signals of related tasks as an inductive bias. It does this by learning multiple tasks in parallel using a shared representation. Multitask transfer in connectionist nets has already been proven. But questions remain about how often training data for useful extra...


Explanation-Based Learning for Image Understanding

Existing prior domain knowledge represents a valuable source of information for image interpretation problems such as classifying handwritten characters. Such domain knowledge must be translated into a form understandable by the learner. Translation can be realized with Explanation-Based Learning (EBL) which provides a kind of dynamic inductive bias, combining domain knowledge and training exam...


2 MECHANISMS OF MULTITASK BACKPROP

Hinton [6] proposed that generalization in artificial neural nets should improve if nets learn to represent the domain's underlying regularities. Abu-Mostafa's hints work [1] shows that the outputs of a backprop net can be used as inputs through which domain-specific information can be given to the net. We extend these ideas by showing that a backprop net learning many related tasks at the same tim...




Publication date: 1993